Open Question


Is Good Taste a Trap?

The New Yorker

Is Good Taste a Trap? The judgments we use to elevate our lives can also hem them in. In Belle Burden's memoir, "Strangers," she describes the end of her marriage. It happened suddenly: until she learned of her husband's infidelity through a voice mail from a stranger, she had no idea anything was wrong. Burden and her husband shared an apartment in Tribeca and a house on Martha's Vineyard.


Constant Regret, Generalized Mixability, and Mirror Descent

Neural Information Processing Systems

We consider the setting of prediction with expert advice: a learner makes predictions by aggregating those of a group of experts. In this setting, for the right choice of loss function and "mixing" algorithm, the learner can achieve constant regret regardless of the number of prediction rounds; for example, constant regret is achievable for mixable losses using the aggregating algorithm. The Generalized Aggregating Algorithm (GAA) is a family of algorithms, parameterized by convex functions on the simplex (entropies), which reduces to the aggregating algorithm when the Shannon entropy S is used. For a given entropy Φ, losses for which constant regret is possible using the GAA are called Φ-mixable. Which losses are Φ-mixable was previously an open question. We fully characterize Φ-mixability and answer other open questions posed by Reid et al. (2015). We show that the Shannon entropy S is fundamental to mixability: any Φ-mixable loss is necessarily S-mixable, and the lowest worst-case regret of the GAA is achieved with the Shannon entropy. Finally, by leveraging the connection between the mirror descent algorithm and the update step of the GAA, we propose a new adaptive generalized aggregating algorithm and analyze its regret bound.
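For concreteness, here is a minimal NumPy sketch of the classical aggregating algorithm for the log loss (which is 1-mixable), the special case the GAA generalizes. This is the standard exponential-weights scheme, not the paper's GAA; the synthetic data, function name, and the choice eta = 1 are assumptions made for the demo.

    import numpy as np

    def aggregating_algorithm(expert_probs, outcomes, eta=1.0):
        # expert_probs: (T, N) array; expert i's predicted probability that y_t = 1.
        # outcomes: (T,) array of binary outcomes in {0, 1}.
        T, N = expert_probs.shape
        expert_probs = np.clip(expert_probs, 1e-12, 1 - 1e-12)
        log_w = np.zeros(N)                  # uniform prior over experts (log domain)
        learner_loss = 0.0
        expert_loss = np.zeros(N)
        for t in range(T):
            w = np.exp(log_w - log_w.max())  # normalized posterior weights
            w /= w.sum()
            # For log loss with eta = 1, the weighted mixture of expert
            # probabilities is a valid substitution function.
            p = w @ expert_probs[t]
            y = outcomes[t]
            losses = -np.log(expert_probs[t]) if y == 1 else -np.log(1 - expert_probs[t])
            learner_loss += -np.log(p) if y == 1 else -np.log(1 - p)
            expert_loss += losses
            log_w -= eta * losses            # exponential-weights update
        return learner_loss, expert_loss

    rng = np.random.default_rng(0)
    T, N = 2000, 8
    probs = rng.uniform(0.05, 0.95, size=(T, N))
    y = rng.binomial(1, probs[:, 0])         # outcomes follow expert 0
    L, Ls = aggregating_algorithm(probs, y)
    print(f"regret = {L - Ls.min():.3f}  <=  ln N = {np.log(N):.3f}")

With eta = 1 and N experts, the learner's cumulative log loss exceeds the best expert's by at most ln N, independent of the number of rounds T; this is the "constant regret" the abstract refers to.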


Implicit Regularization in Deep Learning May Not Be Explainable by Norms

Neural Information Processing Systems

Mathematically characterizing the implicit regularization induced by gradient-based optimization is a longstanding pursuit in the theory of deep learning. A widespread hope is that a characterization based on minimization of norms may apply, and a standard test-bed for studying this prospect is matrix factorization (matrix completion via linear neural networks). It is an open question whether norms can explain the implicit regularization in matrix factorization. The current paper resolves this open question in the negative, by proving that there exist natural matrix factorization problems on which the implicit regularization drives all norms (and quasi-norms) towards infinity. Our results suggest that, rather than perceiving the implicit regularization via norms, a potentially more useful interpretation is minimization of rank. We demonstrate empirically that this interpretation extends to a certain class of non-linear neural networks, and hypothesize that it may be key to explaining generalization in deep learning.
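As a concrete illustration, here is a toy NumPy sketch in the spirit of the paper's matrix-completion setting: three entries of a 2x2 matrix are observed, and a depth-2 linear network (matrix factorization) is fit to them by gradient descent. The specific observed entries, the initialization, the step size, and the iteration count are all assumptions for the demo; whether and how fast the unobserved entry grows in a finite run depends on them.

    import numpy as np

    # Three entries of a 2x2 matrix are observed; entry (0, 0) is not.
    mask   = np.array([[0., 1.], [1., 1.]])   # 1 = observed
    target = np.array([[0., 1.], [1., 0.]])   # values at observed positions

    # Depth-2 factorization W = A @ B, initialized small and balanced with a
    # positive-determinant product: a regime in which fitting the observed
    # entries can force the unobserved entry upward.
    A = 1e-2 * np.eye(2)
    B = 1e-2 * np.eye(2)

    lr = 0.05
    for step in range(100_001):
        W = A @ B
        G = mask * (W - target)               # grad of 0.5*||mask*(W - target)||^2 wrt W
        dA, dB = G @ B.T, A.T @ G             # backprop through the product
        A -= lr * dA
        B -= lr * dB
        if step % 20_000 == 0:
            loss = 0.5 * np.sum(G * G)
            sv = np.linalg.svd(W, compute_uv=False)
            print(f"step {step:6d}  loss {loss:.2e}  W[0,0] {W[0, 0]:9.3f}  "
                  f"||W||_F {np.linalg.norm(W):9.3f}  sigma2/sigma1 {sv[1] / sv[0]:.2e}")

In the regime the paper analyzes (gradient flow from small balanced initialization), driving the loss to zero forces the unobserved entry W[0,0], and with it every norm of W, to grow without bound, while the ratio of singular values decays toward zero, i.e., W approaches rank 1; tracking the printed diagnostics lets you watch for exactly that trade-off.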


How to Reclaim Your Mind

The New Yorker

Can You Reclaim Your Mind? To feel mentally alive, you have to do more than defeat distraction. Looking back over the columns I've written in 2025, I can see that a lot of them, broadly construed, have been about reclaiming one's mind. I wrote about living in the present, picturing the future, and exploring one's memories; about reading, learning, and making the most of one's spare time; and about whether artificial intelligence will end up expanding our thinking or limiting it. The shared subject was resistance to the forces, malevolent or inertial, that can render us mentally exhausted and scattered.


Is A.I. Actually a Bubble?

The New Yorker

Is A.I. Actually a Bubble? The narrative of boom and bust is familiar--but also out of step with the possibilities of a new technology. Over the past few months, I've introduced artificial intelligence into the hobby life of my seven-year-old son, Peter. On Saturdays, he takes a coding class, in which he recently made a version of rock-paper-scissors, and he really wants to make more sophisticated games at home. I gave ChatGPT and Claude a sense of his skill level, and they instantaneously suggested next steps. Claude proposed trying to recreate Pong in Scratch, a coding environment for kids.


If You Quit Social Media, Will You Read More Books?

The New Yorker

Books are inefficient, and the internet is training us to expect optimized experiences. Here's a thought many of us have these days: if only we weren't on our damn phones all the time, we would surely unlock a better self--one that went on hikes and talked more with our children and felt less rank jealousy about other people's successes. It's a nice idea; once a day, at least, I wonder what my life would be like if I smashed my phone into bits and never contacted AppleCare. Would I become a scratch golfer or one of those fathers who does thousand-piece puzzles with his children? Would I at least read more difficult novels?

